20 research outputs found

    Robust Method for Semantic Segmentation of Whole-Slide Blood Cell Microscopic Image

    Full text link
    Previous works on segmentation of SEM (scanning electron microscope) blood cell images ignore the semantic segmentation approach to whole-slide blood cell segmentation. In the proposed work, we address the problem of whole-slide blood cell segmentation using a semantic segmentation approach. We design a novel convolutional encoder-decoder framework with VGG-16 as the pixel-level feature extraction model. The proposed framework comprises three main steps. First, all the original images, along with manually generated ground-truth masks of each blood cell type, are passed through the preprocessing stage, in which pixel-level labeling, RGB-to-grayscale conversion of the masked image, pixel fusing, and unity mask generation are performed. Second, VGG-16 is loaded into the system, where it acts as a pretrained pixel-level feature extraction model. Third, the training process is initiated on the proposed model. We evaluated our network's performance on three evaluation metrics and obtained outstanding results for classwise as well as global and mean accuracies. Our system achieved classwise accuracies of 97.45%, 93.34%, and 85.11% for RBCs, WBCs, and platelets, respectively, while the global and mean accuracies were 97.18% and 91.96%, respectively. Comment: 13 pages, 13 figures
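    The preprocessing stage described above (pixel-level labeling, RGB-to-grayscale conversion, and mask fusing) can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' code; the class indices and function names are assumptions:

```python
import numpy as np

# Hypothetical class indices for background plus the three cell types.
LABELS = {"background": 0, "rbc": 1, "wbc": 2, "platelet": 3}

def rgb_to_gray(mask_rgb):
    """Convert an RGB mask to grayscale using the standard luminance weights."""
    return (0.299 * mask_rgb[..., 0]
            + 0.587 * mask_rgb[..., 1]
            + 0.114 * mask_rgb[..., 2]).astype(np.uint8)

def fuse_masks(per_class_masks):
    """Fuse binary per-class masks into a single pixel-level label map.

    per_class_masks: dict mapping class name -> boolean HxW array.
    Later classes overwrite earlier ones where masks overlap.
    """
    shape = next(iter(per_class_masks.values())).shape
    label_map = np.full(shape, LABELS["background"], dtype=np.uint8)
    for name, mask in per_class_masks.items():
        label_map[mask] = LABELS[name]
    return label_map
```

    A "unity mask" covering all cell classes would then simply be `label_map > 0`.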

    Curvelet based offline analysis of SEM images.

    No full text
    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process that requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process: texture analysis and quantification. The method involves a preprocessing step aimed at noise removal in order to avoid false edges. For texture analysis, the proposed method employs the state-of-the-art Curvelet transform, followed by segmentation through a combination of entropy filtering, thresholding, and mathematical morphology (MM). The quantification is carried out by applying a box-counting algorithm for fractal dimension (FD) calculation, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter, outperforming the well-known Watershed segmentation algorithm.
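    The box-counting FD estimate used in the quantification stage can be sketched as follows (an illustrative NumPy implementation under standard assumptions, not the authors' code; the FD is the negated slope of a log-log least-squares fit of box count against box size):

```python
import numpy as np

def box_count(img, k):
    """Count k x k boxes containing at least one foreground pixel."""
    h, w = img.shape
    # Sum the image over a grid of k x k blocks, then count nonzero blocks.
    sums = np.add.reduceat(
        np.add.reduceat(img, np.arange(0, h, k), axis=0),
        np.arange(0, w, k), axis=1)
    return int(np.count_nonzero(sums))

def fractal_dimension(img):
    """Estimate the fractal dimension of a binary image by box counting."""
    sizes = 2 ** np.arange(1, int(np.log2(min(img.shape))))
    counts = [box_count(img, int(k)) for k in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

    A filled region gives an FD close to 2; the perimeter estimate described above would instead count only boundary boxes (boxes that are neither empty nor completely filled).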

    Image based analysis of meibomian gland dysfunction using conditional generative adversarial neural network

    No full text
    Objective: Meibomian gland dysfunction (MGD) is a primary cause of dry eye disease. Analysis of MGD, its severity, and the shapes and variation of the acini of the meibomian glands (MGs) is receiving much attention in ophthalmology clinics. Existing methods for diagnosing, detecting, and analysing meibomianitis are not capable of quantifying the irregularities in IR (infrared) images of the MG area, such as light reflection, intergland and intragland boundaries, improper focus and positioning of the light, and eyelid eversion. Methods and analysis: We propose a model based on adversarial learning, namely a conditional generative adversarial network, that can overcome these challenges. The generator of the model learns the mapping from an IR image of the MG to a confidence map specifying the probability of each pixel belonging to an MG. The discriminative part of the model penalises mismatches between the IR image of the MG and the confidence map. Furthermore, adversarial learning assists the generator in producing a high-quality confidence map, which is transformed into a binary image by fixed thresholding to complete the segmentation of the MG. We identified MGs and intergland boundaries from IR images. Results: The method is evaluated by meiboscoring, grading, Pearson correlation, and Bland-Altman analysis. We also judged the quality of our method through the average Pompeiu-Hausdorff distance and the Aggregated Jaccard Index. Conclusions: This technique provides a significant improvement in the quantification of irregularities in IR images and outperforms state-of-the-art results for the detection and analysis of the dropout area of MGD.
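    The fixed-thresholding step that turns the generator's confidence map into a binary segmentation can be sketched as follows; the 0.5 threshold is an assumed default, since the abstract only states that a fixed threshold is used:

```python
import numpy as np

def confidence_to_mask(conf_map, threshold=0.5):
    """Binarize a per-pixel confidence map into a segmentation mask.

    conf_map: HxW array of probabilities that each pixel belongs to a
    meibomian gland. Pixels at or above the (assumed) fixed threshold
    are marked as gland (1), the rest as background (0).
    """
    return (conf_map >= threshold).astype(np.uint8)
```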

    A QoS-Aware Machine Learning-Based Framework for AMI Applications in Smart Grids

    No full text
    The Internet of things (IoT) enables a diverse set of applications such as distribution automation, smart cities, wireless sensor networks, and advanced metering infrastructure (AMI). In smart grids (SGs), quality of service (QoS) and AMI traffic management need to be considered in the design of efficient AMI architectures. In this article, we propose a QoS-aware machine-learning-based framework for AMI applications in smart grids. Our proposed framework comprises a three-tier hierarchical architecture for AMI applications, a machine-learning-based hierarchical clustering approach, and a priority-based scheduling technique to ensure QoS in AMI applications in smart grids. We introduce a three-tier hierarchical architecture for AMI applications in smart grids to take advantage of IoT communication technologies and the cloud infrastructure. In this architecture, smart meters are deployed over a georeferenced area where the control center has remote access over the Internet to these network devices. More specifically, these devices can be digitally controlled and monitored using simple web interfaces such as REST APIs. We modify the existing K-means algorithm to construct a hierarchical clustering topology that employs Wi-SUN technology for bi-directional communication between smart meters and data concentrators. Further, we develop a queuing model in which different priorities are assigned to each item of the critical and normal AMI traffic based on its latency and packet size. The critical AMI traffic is scheduled first using priority-based scheduling while the normal traffic is scheduled with a first-in–first-out scheduling scheme to ensure the QoS requirements of both traffic classes in the smart grid network. The numerical results demonstrate that the target coverage and connectivity requirements of all smart meters are fulfilled with the least number of data concentrators in the design. 
Additionally, the numerical results show that the architectural cost is reduced and the bottleneck problem of the data concentrator is eliminated. Furthermore, the performance of the proposed framework is evaluated and validated on the CloudSim simulator. The simulation results of our proposed framework show efficient performance in terms of CPU utilization compared to a traditional framework that uses single-hop communication from smart meters to data concentrators with a first-in–first-out scheduling scheme.
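    The two-class scheduling described above (priority-based service for critical AMI traffic, FIFO for normal traffic) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the priority key derived from latency budget and packet size is an assumption based on the abstract:

```python
import heapq
from collections import deque

class AMIScheduler:
    """Sketch of a two-class AMI traffic scheduler.

    Critical packets are always served first, ordered by a priority
    derived from latency budget and packet size (tighter budgets and
    smaller packets go out first); normal packets are served FIFO.
    """
    def __init__(self):
        self._critical = []      # min-heap of (priority, seq, packet)
        self._normal = deque()   # FIFO queue
        self._seq = 0            # insertion counter breaks priority ties

    def enqueue(self, packet, critical=False, latency_ms=0, size_bytes=0):
        if critical:
            heapq.heappush(self._critical,
                           ((latency_ms, size_bytes), self._seq, packet))
            self._seq += 1
        else:
            self._normal.append(packet)

    def dequeue(self):
        if self._critical:
            return heapq.heappop(self._critical)[2]
        if self._normal:
            return self._normal.popleft()
        return None
```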

    QoS-Aware Cost Minimization Strategy for AMI Applications in Smart Grid Using Cloud Computing

    No full text
    Cloud computing coupled with Internet of Things technology provides a wide range of cloud services, such as memory, storage, computational processing, network bandwidth, and database applications, to end users on demand over the Internet. More specifically, cloud computing offers an efficient "pay as per usage" model. However, utility providers in the Smart Grid face challenges in the design and implementation of such an architecture in order to minimize the cost of the underlying hardware, software, and network services. In the Smart Grid, smart meters generate large volumes of heterogeneous traffic, so the available resources, such as buffer, storage, limited processing, and bandwidth, must be utilized cost-effectively in the underlying network infrastructure. In this context, this article introduces a QoS-aware Hybrid Queue Scheduling (HQS) model, deployed over an IoT-based network integrated with a cloud environment, for the different advanced metering infrastructure (AMI) application traffic classes, which have different QoS levels in the Smart Grid network. The proposed optimization model supports, classifies, and prioritizes the AMI application traffic. The main objective is to reduce the cost of the buffer, processing power, and network bandwidth utilized by AMI applications in the cloud environment. For this, we developed a simulation model in the CloudSim simulator that uses a simple mathematical model to achieve the objective function. During the simulations, the effects of various numbers of cloudlets on the cost of virtual machine resources, such as RAM, CPU processing, and available bandwidth, were investigated. The obtained simulation results show that our proposed model successfully competes with previous schemes, minimizing the processing, memory, and bandwidth cost by a significant margin. 
Moreover, the simulation results confirm that the proposed optimization model behaves as expected and is realistic for AMI application traffic in the Smart Grid network using cloud computing.
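    The cost objective implied above can be sketched as a simple linear model: total cost is resource usage times a unit price, summed over RAM, CPU, and bandwidth. The resource names and the linear form are assumptions for illustration; the abstract says only that a simple mathematical model is used in CloudSim:

```python
def vm_resource_cost(usage, unit_costs):
    """Linear cost of a virtual machine's resource usage.

    usage: dict of resource name -> consumed amount (e.g. RAM in MB,
    CPU in MIPS, bandwidth in Mbps); unit_costs: price per unit of each.
    """
    return sum(usage[r] * unit_costs[r] for r in usage)
```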

    A QoS-Aware Data Aggregation Strategy for Resource Constrained IoT-Enabled AMI Network in Smart Grid

    No full text
    Emerging Internet of Things (IoT) technologies and applications have enabled the Smart Grid utility control center to connect, monitor, control, and exchange data between smart appliances, smart meters (SMs), data concentrators (DCs), and the control center server (CCS) over the Internet. In particular, a DC receives different Advanced Metering Infrastructure (AMI) application data from multiple SMs for processing, queuing, aggregation, and forwarding onward to the CCS over the IoT network. However, DCs are expensive components of the AMI network. Recently, SMs have been used as relay devices to build a cost-effective AMI network infrastructure that avoids the DC placement and bottleneck problems. However, SMs are resource-constrained (limited CPU, RAM, storage, and network capacity) intelligent devices that face numerous communication challenges during outage conditions and summer peak hours, when bulk amounts of data with different traffic rates and latency requirements are exchanged with the utility control center. Therefore, efficient data aggregation is required at the relay devices to cope with high data exchange rates and optimize the constrained resources of the AMI network. In this article, we propose a hybrid data aggregation strategy implemented on an aggregator-head (AH) in the clustering topology, which performs data aggregation on the Interval Meter Reading (IMR) application data. Introducing the AH greatly reduces the workload of the cluster-heads (CHs) and efficiently utilizes the constrained resources of AMI devices in a cost-effective manner. The proposed strategy is evaluated against different existing approaches using the CloudSim simulation tool. The experimental and simulation results show the effectiveness of the proposed strategy: limited resources are optimized, CH workload is minimized, and the QoS of AMI applications is maintained.
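    The idea of aggregating interval meter readings at an aggregator head before forwarding them can be sketched as follows; the record fields and the 15-minute interval are assumptions for illustration, not details from the abstract:

```python
def aggregate_imr(readings, interval_s=900):
    """Bucket raw interval-meter readings and reduce each bucket.

    readings: list of (meter_id, timestamp_s, kwh) tuples collected from
    cluster members. Readings are grouped into fixed time intervals per
    meter, and each group is collapsed into one summary record, so the
    aggregator head forwards far fewer packets to the control center.
    """
    buckets = {}
    for meter_id, ts, kwh in readings:
        key = (meter_id, ts // interval_s)
        buckets.setdefault(key, []).append(kwh)
    return [
        {"meter": m, "interval": i, "kwh_total": sum(v), "samples": len(v)}
        for (m, i), v in sorted(buckets.items())
    ]
```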

    Lung’s Segmentation Using Context-Aware Regressive Conditional GAN

    No full text
    After COVID-19 pneumonia was declared a pandemic, researchers promptly advanced to seek solutions for patients fighting this fatal disease. Computed tomography (CT) scans offer valuable insight into how COVID-19 infection affects the lungs. Analysis of CT scans is very significant, especially when physicians are striving for quick solutions. This study successfully segmented lung infection due to COVID-19 and provided physicians with a quantitative analysis of the condition. COVID-19 lesions often occur near and over parenchyma walls, which are denser and exhibit lower contrast than the tissues outside the parenchyma. We applied adaptive Wallis and Gaussian filters alternately to regulate the outlining of the lungs and of lesions near the parenchyma. We propose a context-aware conditional generative adversarial network (CGAN) with gradient penalty and spectral normalization for automatic lung and lesion segmentation. The proposed CGAN captures higher-order statistics compared to traditional deep-learning models. It produced promising results for lung segmentation, achieving an accuracy of 99.87%, DSC of 96.77%, and AJC of 95.59%, and showed outstanding results for COVID-19 lesion segmentation, with an accuracy of 99.91%, DSC of 92.91%, and AJC of 92.91%. Additionally, the suggested network attained sensitivities of 100%, 81.02%, 76.45%, and 99.01% for the critical, severe, moderate, and mild infection severity levels, respectively. The proposed model outperformed state-of-the-art techniques for COVID-19 segmentation and detection.
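    The overlap metrics reported above can be computed from binary masks as follows (a standard NumPy sketch; the paper's AJC aggregates the Jaccard index over instances, which this per-mask version does not show):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard_index(pred, gt):
    """Jaccard index (intersection over union) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```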

    Identification of Anemia and Its Severity Level in a Peripheral Blood Smear Using 3-Tier Deep Neural Network

    No full text
    The automatic detection of blood cell elements for identifying morphological deformities is still a challenging research domain. It plays a pivotal role in recognizing disease and detecting its severity level. Using a simple microscope, manual detection of diseases and morphological disorders in blood cells is mostly time-consuming and error-prone. Due to the overlapped structure of RBCs, pathologists face challenges in precisely differentiating normal from abnormal cell shapes and sizes. Currently, convolutional neural network-based algorithms are effective tools for addressing this issue. Existing techniques fail to provide effective anemia detection and severity level prediction due to the dense and overlapped structure of RBCs and the unavailability of standard datasets for blood diseases and severity level detection techniques. This work proposes a three-tier deep convolutional fused network (3-TierDCFNet) to extract optimal morphological features, identify anemic images, and predict the severity of anemia. The proposed model comprises two modules: Module-I classifies the input image into two classes, i.e., Healthy and Anemic, while Module-II detects the anemia severity level and categorizes it as Mild or Chronic. After each tier's training, a validation function is employed to reduce inappropriate feature selection. To validate the proposed model for healthy and anemic RBC classification and anemia severity level detection, a state-of-the-art anemic and healthy RBC dataset was developed in collaboration with Shaukat Khanum Hospital and Research Center (SKMCH&RC), Pakistan. To evaluate the proposed model, the training, validation, and test accuracies were computed along with recall, F1-score, and specificity. The global results reveal that the proposed model achieved training, validation, and test accuracies of 91.37%, 88.85%, and 86.06%, with recall, F1-score, and specificity of 98.95%, 98.12%, and 98.12%, respectively.
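    The evaluation metrics reported above follow directly from binary confusion counts; a minimal sketch (standard definitions, not the authors' evaluation code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Recall, specificity, and F1-score from binary confusion counts,
    as used to evaluate a healthy-vs-anemic classifier."""
    recall = tp / (tp + fn) if tp + fn else 0.0        # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"recall": recall, "specificity": specificity, "f1": f1}
```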